205 research outputs found

    Artificial scientists

    No full text

    Louise: A Meta-Interpretive Learner for Efficient Multi-clause Learning of Large Programs

    Get PDF
We present Louise, a new Meta-Interpretive Learner, implemented in Prolog, that performs efficient multi-clause learning. Louise is efficient enough to learn programs that are too large to be learned with the current state-of-the-art MIL system, Metagol. Louise learns by first constructing the most general program in the hypothesis space of a MIL problem and then reducing this "Top program" by Plotkin's program reduction algorithm. In this extended abstract we describe Louise's learning approach and experimentally demonstrate that Louise can learn programs that are too large to be learned by our implementation of Metagol, Thelma.
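The two-stage approach described in the abstract can be sketched in a toy, propositional form. This is an invented illustration, not Louise's Prolog implementation: stage one collects every candidate clause that covers at least one positive example and no negative example (the "Top program"); stage two drops clauses whose positive coverage the remaining program already provides, a simplified stand-in for Plotkin's program reduction.

```python
def covers(clause, example):
    # A "clause" is modelled as a boolean test over examples.
    return clause(example)

def top_program(candidates, positives, negatives):
    # Keep every clause covering >= 1 positive and no negatives.
    top = []
    for clause in candidates:
        if any(covers(clause, e) for e in positives) and \
           not any(covers(clause, e) for e in negatives):
            top.append(clause)
    return top

def reduce_program(program, positives):
    # Greedily remove clauses redundant w.r.t. positive coverage.
    reduced = list(program)
    for clause in list(reduced):
        rest = [c for c in reduced if c is not clause]
        if all(any(covers(c, e) for c in rest) for e in positives):
            reduced = rest
    return reduced

# Toy target: even numbers. Candidate clauses are simple tests.
candidates = [
    lambda x: x % 2 == 0,   # general clause: covers all positives
    lambda x: x in (2, 4),  # special case, made redundant by the first
    lambda x: x > 0,        # covers a negative example -> excluded
]
positives = [2, 4, 6]
negatives = [3, 5]

top = top_program(candidates, positives, negatives)
hypothesis = reduce_program(top, positives)
print(len(top), len(hypothesis))  # → 2 1 : the Top program shrinks
```

The point of the sketch is the division of labour: construction is a single pass over the hypothesis space with no search, and all generalisation pressure comes from the reduction step afterwards.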

    Inductive Acquisition of Expert Knowledge

    Get PDF
Expert systems divide neatly into two categories: those in which (1) the expert decisions result in changes to some external environment (control systems), and those in which (2) the expert decisions merely seek to describe the environment (classification systems). Both the explanation of computer-based reasoning and the "bottleneck" (Feigenbaum, 1979) of knowledge acquisition are major issues in expert systems research. We have contributed to these areas of research in two ways. Firstly, we have implemented an expert system shell, the Mugol environment, which facilitates knowledge acquisition by inductive inference and provides automatic explanation of run-time reasoning on demand. RuleMaster, a commercial version of this environment, has been used to advantage industrially in the construction and testing of two large classification systems. Secondly, we have investigated a new technique called sequence induction which can be used in the construction of control systems. Sequence induction is based on theoretical work in grammatical learning. We have improved existing grammatical learning algorithms as well as suggesting and theoretically characterising new ones. These algorithms have been successfully applied to the acquisition of knowledge for a diverse set of control systems, including inductive construction of robot plans and chess end-game strategies.
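The grammatical-learning setting that sequence induction builds on can be illustrated with its simplest ingredient: a prefix tree acceptor (PTA) constructed from positive example sequences. This is a hypothetical minimal sketch, not the Mugol or RuleMaster algorithms; real grammatical-learning methods go on to merge PTA states in order to generalise beyond the observed sequences.

```python
def build_pta(sequences):
    """Build a prefix tree acceptor from positive sequences.

    Returns (transitions, accepting): a dict mapping
    (state, symbol) -> state, and the set of accepting states.
    """
    transitions = {}
    accepting = set()
    next_state = 1  # state 0 is the root
    for seq in sequences:
        state = 0
        for symbol in seq:
            key = (state, symbol)
            if key not in transitions:
                transitions[key] = next_state
                next_state += 1
            state = transitions[key]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, seq):
    # Walk the acceptor; reject on any missing transition.
    state = 0
    for symbol in seq:
        key = (state, symbol)
        if key not in transitions:
            return False
        state = transitions[key]
    return state in accepting

# Toy robot-plan alphabet: move (m), grasp (g), release (r).
plans = [["m", "g", "r"], ["m", "m", "g", "r"]]
trans, acc = build_pta(plans)
print(accepts(trans, acc, ["m", "g", "r"]))  # → True: a seen plan
print(accepts(trans, acc, ["g", "m"]))       # → False: unseen prefix
```

Before any state merging, the PTA accepts exactly the training sequences; the learning algorithms referred to in the abstract differ in which merges they licence and therefore in how far they generalise.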

    Explanatory machine learning for sequential human teaching

    Full text link
The topic of comprehensibility of machine-learned theories has recently drawn increasing attention. Inductive Logic Programming (ILP) uses logic programming to derive logic theories from small data based on abduction and induction techniques. Learned theories are represented in the form of rules as declarative descriptions of obtained knowledge. In earlier work, the authors provided the first evidence of a measurable increase in human comprehension based on machine-learned logic rules for simple classification tasks. In a later study, it was found that the presentation of machine-learned explanations to humans can produce both beneficial and harmful effects in the context of game learning. We continue our investigation of comprehensibility by examining the effects of the ordering of concept presentations on human comprehension. In this work, we examine the explanatory effects of curriculum order and the presence of machine-learned explanations for sequential problem-solving. We show that 1) there exist tasks A and B such that learning A before B yields better human comprehension than learning B before A, and 2) there exist tasks A and B such that the presence of explanations when learning A contributes to improved human comprehension when subsequently learning B. We propose a framework for the effects of sequential teaching on comprehension based on an existing definition of comprehensibility, and support it with data collected in human trials. Empirical results show that sequential teaching of concepts with increasing complexity a) has a beneficial effect on human comprehension, b) leads to human re-discovery of divide-and-conquer problem-solving strategies, and c) allows adaptations of human problem-solving strategy, with better performance, through study of machine-learned explanations. Comment: Submitted to the International Joint Conference on Learning & Reasoning (IJCLR) 202

    Inductive logic programming at 30

    Full text link
Inductive logic programming (ILP) is a form of logic-based machine learning. The goal of ILP is to induce a hypothesis (a logic program) that generalises given training examples and background knowledge. As ILP turns 30, we survey recent work in the field. In this survey, we focus on (i) new meta-level search methods, (ii) techniques for learning recursive programs that generalise from few examples, (iii) new approaches for predicate invention, and (iv) the use of different technologies, notably answer set programming and neural networks. We conclude by discussing some of the current limitations of ILP and directions for future research. Comment: Extension of IJCAI20 survey paper. arXiv admin note: substantial text overlap with arXiv:2002.11002, arXiv:2008.0791
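The ILP setting described in the abstract can be made concrete with a deliberately tiny example: given background knowledge B (here, `parent/2` facts) and examples E of `grandparent/2`, search a hypothesis space of rule bodies for one that, together with B, covers all positive examples and no negatives. The names and the three-candidate space are invented for illustration; real ILP systems search vastly larger spaces.

```python
# Background knowledge B: parent(X, Y) facts.
parent = {("ann", "bob"), ("bob", "carol"), ("bob", "dave")}

# Examples E for the target predicate grandparent(X, Y).
positives = {("ann", "carol"), ("ann", "dave")}
negatives = {("bob", "ann"), ("carol", "dave")}

people = {p for pair in parent for p in pair}

def covered(body, x, y):
    # body(x, y, z) -> bool; z is an existentially quantified variable.
    return any(body(x, y, z) for z in people)

# Candidate bodies for the clause grandparent(X, Y) :- ...
candidates = {
    "parent(X,Z), parent(Z,Y)":
        lambda x, y, z: (x, z) in parent and (z, y) in parent,
    "parent(X,Z), parent(Y,Z)":
        lambda x, y, z: (x, z) in parent and (y, z) in parent,
    "parent(Z,X), parent(Z,Y)":
        lambda x, y, z: (z, x) in parent and (z, y) in parent,
}

def consistent(body):
    # H is acceptable iff B and H entail every positive and no negative.
    return all(covered(body, x, y) for x, y in positives) and \
           not any(covered(body, x, y) for x, y in negatives)

hypothesis = [name for name, body in candidates.items() if consistent(body)]
print(hypothesis)  # → ['parent(X,Z), parent(Z,Y)']
```

Only the classic chained clause survives: the other two bodies either fail to cover the positives or would cover a negative, which is exactly the generalise-without-overgeneralising constraint the abstract states.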